RDG for DPF Zero Trust (DPF-ZT) with OVN VPC DPU service

DPU Provisioning

Connect to the first DPU's BMC over SSH to change the BMC root password:

Jump Node Console


$ ssh root@10.0.110.201
root@10.0.110.201's password: <BMC root password; the default is root/0penBmc and must be changed on first login>
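On the first login, OpenBMC forces a password change for the default root account. The exchange below is a representative sketch; the exact prompts depend on the BMC firmware version:

Jump Node Console

$ ssh root@10.0.110.201
root@10.0.110.201's password: 0penBmc
You are required to change your password immediately (administrator enforced).
Changing password for root.
Current password: 0penBmc
New password: <new BMC root password>
Retype new password: <new BMC root password>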

Verify that the rshim service is running, and start it if it is not:

Jump Node Console


root@dpu-bmc:~# systemctl status rshim
* rshim.service - rshim driver for BlueField SoC
     Loaded: loaded (/usr/lib/systemd/system/rshim.service; enabled; preset: disabled)
     Active: active (running) since Mon 2025-07-14 13:21:34 UTC; 18h ago
       Docs: man:rshim(8)
    Process: 866 ExecStart=/usr/sbin/rshim $OPTIONS (code=exited, status=0/SUCCESS)
   Main PID: 874 (rshim)
        CPU: 2h 43min 44.730s
     CGroup: /system.slice/rshim.service
             `-874 /usr/sbin/rshim

Jul 14 13:21:34 dpu-bmc (rshim)[866]: rshim.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 14 13:21:34 dpu-bmc rshim[874]: Created PID file: /var/run/rshim.pid
Jul 14 13:21:34 dpu-bmc rshim[874]: USB device detected
Jul 14 13:21:38 dpu-bmc rshim[874]: Probing usb-2.1
Jul 14 13:21:38 dpu-bmc rshim[874]: create rshim usb-2.1
Jul 14 13:21:39 dpu-bmc rshim[874]: rshim0 attached

root@dpu-bmc:~# ls /dev/rshim0
boot  console  misc  rshim

# To enable and start the rshim service, if it is not running:
root@dpu-bmc:~# systemctl enable rshim
root@dpu-bmc:~# systemctl start rshim
root@dpu-bmc:~# systemctl status rshim
root@dpu-bmc:~# ls /dev/rshim0

Repeat steps 4-10 on all of your DPUs.
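To run the same check across all DPUs from the jump node, you can loop over the BMC addresses. This is a minimal sketch, assuming the four BMC IPs used later in the discovery range and SSH access with the password you set above:

Jump Node Console

$ for ip in 10.0.110.201 10.0.110.202 10.0.110.203 10.0.110.204; do
    echo "=== $ip ==="
    ssh root@$ip 'systemctl is-active rshim && ls /dev/rshim0'
  done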

To authenticate with Redfish, it is necessary to provide a password for the BMC root user:

Jump Node Console


$ kubectl create secret generic -n dpf-operator-system bmc-shared-password --from-literal=password='ROOT_BMC_PASSWORD'
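Optionally, confirm that the secret exists (the value is stored base64-encoded; the output below is representative):

Jump Node Console

$ kubectl get secret bmc-shared-password -n dpf-operator-system
NAME                  TYPE     DATA   AGE
bmc-shared-password   Opaque   1      5s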

Create the following YAML to detect DPUDevices and DPUNodes:

ip_discovery.yaml


---
apiVersion: provisioning.dpu.nvidia.com/v1alpha1
kind: DPUDiscovery
metadata:
  name: dpu-discovery-192.168.1-10
  namespace: dpf-operator-system
spec:
  # Define the IP range to scan
  ipRangeSpec:
    ipRange:
      startIP: "10.0.110.201" # Replace with your start IP
      endIP: "10.0.110.204"   # Replace with your end IP

  # Optional: Set scan interval
  scanInterval: "3m"
  # Optional: Set number of workers (default is 1 per 255 IPs)
  workers: 1

Run the command to create DPUDevices and DPUNodes:

Jump Node Console


$ kubectl apply -f ip_discovery.yaml
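Optionally, confirm that the DPUDiscovery resource itself was created before checking for discovered devices. This assumes kubectl resolves the resource by its kind name, which it normally does:

Jump Node Console

$ kubectl get dpudiscovery -n dpf-operator-system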

Verify the DPF system by ensuring that the DPUDevices exist:

Jump Node Console


$ kubectl get dpudevices -n dpf-operator-system
NAME           AGE
mt2402xz0f7x   9m
mt2402xz0f80   9m
mt2402xz0f8g   9m
mt2402xz0f9n   9m

Verify the DPF system by ensuring that the DPUNodes exist:

Jump Node Console


$ kubectl get dpunodes -n dpf-operator-system
NAME                    AGE
dpu-node-mt2402xz0f7x   9m
dpu-node-mt2402xz0f80   9m
dpu-node-mt2402xz0f8g   9m
dpu-node-mt2402xz0f9n   9m

Add labels to DPUNodes. Set the values according to your environment.

Jump Node Console


kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f7x feature.node.kubernetes.io/dpu-0-pf0-name=ens1f0np0
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f7x feature.node.kubernetes.io/dpu-0-number-of-pfs=2
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f7x feature.node.kubernetes.io/dpu-oob-bridge-configured=""
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f7x feature.node.kubernetes.io/dpu-enabled=true
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f7x feature.node.kubernetes.io/dpu-0-pci-address=0000-2b-00

kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f80 feature.node.kubernetes.io/dpu-0-pf0-name=ens1f0np0
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f80 feature.node.kubernetes.io/dpu-0-number-of-pfs=2
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f80 feature.node.kubernetes.io/dpu-oob-bridge-configured=""
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f80 feature.node.kubernetes.io/dpu-enabled=true
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f80 feature.node.kubernetes.io/dpu-0-pci-address=0000-2b-00

kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f8g feature.node.kubernetes.io/dpu-0-pf0-name=ens1f0np0
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f8g feature.node.kubernetes.io/dpu-0-number-of-pfs=2
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f8g feature.node.kubernetes.io/dpu-oob-bridge-configured=""
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f8g feature.node.kubernetes.io/dpu-enabled=true
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f8g feature.node.kubernetes.io/dpu-0-pci-address=0000-2b-00

kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f9n feature.node.kubernetes.io/dpu-0-pf0-name=ens1f0np0
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f9n feature.node.kubernetes.io/dpu-0-number-of-pfs=2
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f9n feature.node.kubernetes.io/dpu-oob-bridge-configured=""
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f9n feature.node.kubernetes.io/dpu-enabled=true
kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system dpu-node-mt2402xz0f9n feature.node.kubernetes.io/dpu-0-pci-address=0000-2b-00
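Because the same five labels are applied to every node, the commands above can also be expressed as a loop. This is an equivalent sketch, assuming all DPUNodes share the same PF name, PF count, and PCI address, as in this setup:

Jump Node Console

$ for node in dpu-node-mt2402xz0f7x dpu-node-mt2402xz0f80 dpu-node-mt2402xz0f8g dpu-node-mt2402xz0f9n; do
    kubectl label dpunodes.provisioning.dpu.nvidia.com -n dpf-operator-system $node \
      feature.node.kubernetes.io/dpu-0-pf0-name=ens1f0np0 \
      feature.node.kubernetes.io/dpu-0-number-of-pfs=2 \
      feature.node.kubernetes.io/dpu-oob-bridge-configured="" \
      feature.node.kubernetes.io/dpu-enabled=true \
      feature.node.kubernetes.io/dpu-0-pci-address=0000-2b-00
  done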

Use the following YAML to define a BFB resource that downloads the BlueField bitstream to a shared volume:

manifests/04-dpudeployment-installation/bfb.yaml


---
apiVersion: provisioning.dpu.nvidia.com/v1alpha1
kind: BFB
metadata:
  name: bf-bundle
  namespace: dpf-operator-system
spec:
  url: $BLUEFIELD_BITSTREAM
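The manifest references the $BLUEFIELD_BITSTREAM environment variable, which envsubst expands in the next step. Export it first, pointing at the BFB bundle URL for your DPF release (the value below is a placeholder):

Jump Node Console

$ export BLUEFIELD_BITSTREAM="<URL to the bf-bundle .bfb file>"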

Run the command to create the BFB:

Jump Node Console


$ cat manifests/04-dpudeployment-installation/bfb.yaml | envsubst | kubectl apply -f -
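Verify that the BFB resource was created; the file downloads to the shared volume in the background, and the column layout may vary between DPF releases (the output below is representative):

Jump Node Console

$ kubectl get bfb -n dpf-operator-system
NAME        AGE
bf-bundle   10s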
